
LSP Optimization: Use rayon for parse_ast_to_typed_tokens function #5473

Merged
merged 8 commits into master on Jan 17, 2024

Conversation

JoshuaBatty
Member

@JoshuaBatty JoshuaBatty commented Jan 16, 2024

Description

Traversing the typed tokens from sway-lib-std now goes from `3.957833ms` → `1.372ms`. Collecting typed tokens on the benchmark project goes from `139.6ms` → `127.7ms`.

I noticed that with this change there is transient contention when reading items from the TokenMap. As such, I've implemented a new function on the TokenMap called try_get_mut_with_retry. If the lock is still held, this method retries the access up to 8 times using a backoff-and-retry pattern.
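For illustration, a minimal sketch of what such a backoff-and-retry accessor could look like, assuming the token map is backed by a `DashMap`; the type names, retry budget, and sleep schedule below are assumptions, not the exact values used in sway-lsp:

```rust
use std::{thread, time::Duration};

use dashmap::{mapref::one::RefMut, try_result::TryResult, DashMap};

// Illustrative backoff-and-retry accessor; `MAX_ATTEMPTS` and the sleep
// schedule are assumptions, not the values used in sway-lsp.
fn try_get_mut_with_retry<'a, K, V>(map: &'a DashMap<K, V>, key: &K) -> Option<RefMut<'a, K, V>>
where
    K: std::hash::Hash + Eq,
{
    const MAX_ATTEMPTS: u32 = 8;
    for attempt in 0..MAX_ATTEMPTS {
        match map.try_get_mut(key) {
            TryResult::Present(value) => return Some(value),
            TryResult::Absent => return None,
            // Another reader/writer currently holds the shard lock:
            // back off (exponentially) and retry.
            TryResult::Locked => thread::sleep(Duration::from_micros(10u64 << attempt)),
        }
    }
    None
}
```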

related to #5445

Checklist

  • I have linked to any relevant issues.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have updated the documentation where relevant (API docs, the reference, and the Sway book).
  • I have added tests that prove my fix is effective or that my feature works.
  • I have added (or requested a maintainer to add) the necessary Breaking* or New Feature labels where relevant.
  • I have done my best to ensure that my PR adheres to the Fuel Labs Code Review Standards.
  • I have requested a review from the relevant team or maintainers.

@JoshuaBatty JoshuaBatty self-assigned this Jan 16, 2024
@JoshuaBatty JoshuaBatty added the language server LSP server label Jan 16, 2024
@JoshuaBatty JoshuaBatty requested review from a team January 16, 2024 01:47
@JoshuaBatty JoshuaBatty enabled auto-merge (squash) January 16, 2024 01:48
@JoshuaBatty JoshuaBatty marked this pull request as draft January 16, 2024 02:47
auto-merge was automatically disabled January 16, 2024 02:47

Pull request was converted to draft

@tritao
Contributor

tritao commented Jan 16, 2024

How come collecting typed tokens is taking 127.7ms? Or does that figure include parsing and type checking already?

@JoshuaBatty
Member Author

JoshuaBatty commented Jan 17, 2024

How come collecting typed tokens is taking 127.7ms? Or does that figure include parsing and type checking already?

This is how long it takes to traverse the typed AST nodes and collect their tokens. I'm going to look at using rayon par_iters for this section, like we already do in the traverse::parsed_tree module. That should hopefully take us down closer to 12ms or so; see the sketch below.

Edit: I've managed to get it down to 14.8ms in this PR -> #5487
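Roughly, the shape of the change is just swapping the sequential per-node loop for a rayon parallel iterator. The types and function names below are hypothetical stand-ins, not the real sway-lsp signatures:

```rust
use rayon::prelude::*;

// Hypothetical stand-ins for the real sway-lsp types; the actual traversal
// context and typed AST node types differ.
struct ParseContext;
struct TypedAstNode;

fn handle_node(_ctx: &ParseContext, _node: &TypedAstNode) {
    // ... walk the node and insert its tokens into the concurrent TokenMap ...
}

// Parallelise the per-node traversal with rayon, mirroring the shape already
// used in `traverse::parsed_tree`.
fn parse_ast_to_typed_tokens(ctx: &ParseContext, nodes: &[TypedAstNode]) {
    nodes.par_iter().for_each(|node| handle_node(ctx, node));
}
```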

@JoshuaBatty JoshuaBatty requested review from crodas and a team January 17, 2024 03:54
@JoshuaBatty JoshuaBatty marked this pull request as ready for review January 17, 2024 03:54
@JoshuaBatty JoshuaBatty enabled auto-merge (squash) January 17, 2024 11:13
@JoshuaBatty JoshuaBatty merged commit 8a4389e into master Jan 17, 2024
35 checks passed
@JoshuaBatty JoshuaBatty deleted the josh/parse_ast_to_typed_tokens_rayon branch January 17, 2024 17:35
JoshuaBatty added a commit that referenced this pull request Jan 17, 2024
…5487)

## Description

Use `rayon` par_iters in the `traverse::typed_tree` module. We already do this in `traverse::parsed_tree`; this just brings the same approach over to the `typed_tree` module as well.

Branched off #5473. ~~Will want to merge that first, then take this out of draft.~~

Collecting typed tokens on the benchmark project goes from
`120.508917ms` → `14.853125ms`

related to #5445